Search for: All records

Creators/Authors contains: "Sun, Hongyue"

Note: When clicking on a Digital Object Identifier (DOI) number, you will be taken to an external site maintained by the publisher. Some full text articles may not yet be available without a charge during the embargo (administrative interval).


  1. Abstract

    Teeth scans are essential for many applications in orthodontics, where the teeth structures are virtualized to facilitate the design and fabrication of the prosthetic piece. Nevertheless, due to limitations caused by factors such as viewing angles, occlusions, and sensor resolution, the 3D scanned point clouds (PCs) can be noisy or incomplete. Hence, there is a critical need to enhance the quality of the teeth PCs to ensure suitable dental treatment. Toward this end, we propose a systematic framework including a two-step data augmentation (DA) technique to augment the limited teeth PCs and a hybrid deep learning (DL) method to complete the incomplete PCs. For the two-step DA, we first mirror and combine the PCs based on the bilateral symmetry of the human teeth and then augment the PCs based on an iterative generative adversarial network (GAN). Two filters are designed to avoid outlier and duplicated PCs during the DA. For the hybrid DL, we first use a deep autoencoder (AE) to represent the PCs. Then, we propose a hybrid approach that selects the best completion of the teeth PCs from the AE and a reinforcement learning (RL) agent-controlled GAN. An ablation study is performed to analyze each component’s contribution. We compare our method with benchmark methods including the point cloud network (PCN), the cascaded refinement network (CRN), and the variational relational point completion network (VRC-Net), and demonstrate that the proposed framework completes teeth PCs with good accuracy across different scenarios.

     
    Free, publicly-accessible full text available August 1, 2024
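    The bilateral-symmetry mirroring in the first DA step can be sketched as follows. This is a minimal illustration, not the paper's implementation: the helper names, the toy point cloud, and the use of Chamfer distance as a possible outlier-filter metric are all assumptions.

    ```python
    import numpy as np

    def mirror_augment(pc, axis=0):
        """Reflect a point cloud across the mid-sagittal plane (x = 0 here)
        and combine it with the original, exploiting bilateral tooth symmetry."""
        mirrored = pc.copy()
        mirrored[:, axis] = -mirrored[:, axis]
        return np.vstack([pc, mirrored])

    def chamfer_distance(a, b):
        """Symmetric Chamfer distance between two point clouds; a common
        completion metric that could back the outlier filter (assumption)."""
        d = np.linalg.norm(a[:, None, :] - b[None, :, :], axis=-1)
        return d.min(axis=1).mean() + d.min(axis=0).mean()

    pc = np.array([[1.0, 0.5, 0.2],   # toy points, illustrative only
                   [2.0, 0.1, 0.3]])
    aug = mirror_augment(pc)          # 4 points: originals plus mirrored copies
    ```

    An augmented PC whose Chamfer distance to the originals exceeds a threshold could then be discarded, which is one plausible reading of the outlier filter.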
  2. Abstract

    Inkjet printing (IJP) is one of the promising additive manufacturing techniques that yield many innovations in electronic and biomedical products. In IJP, products are fabricated by depositing droplets on substrates, and the quality of the products is highly affected by the droplet pinch-off behaviors. Therefore, identifying pinch-off behaviors of droplets is critical. However, annotating the pinch-off behaviors is burdensome since a large number of images of pinch-off behaviors must be labeled. Active learning (AL) is a machine learning technique that extracts human knowledge by iteratively acquiring human annotations and updating the classification model for pinch-off behavior identification. Consequently, good classification performance can be achieved with limited labels. However, during the query process, the most informative instances (i.e., images) vary, and most query strategies in AL cannot handle these dynamics since they are handcrafted. Thus, this paper proposes a multiclass reinforced active learning (MCRAL) framework in which a query strategy is trained by reinforcement learning (RL). We design a unique intrinsic reward signal to improve the classification model performance. Moreover, extracting features from images for pinch-off behavior identification is not trivial; thus, we use a graph convolutional network for droplet image feature extraction. The results show that MCRAL outperforms conventional AL and can reduce human effort in pinch-off behavior identification. We further demonstrate that, by linking the process parameters to the predicted droplet pinch-off behaviors, the droplet pinch-off behavior can be adjusted based on MCRAL.

     
    Free, publicly-accessible full text available July 1, 2024
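    A single AL query step can be illustrated with a handcrafted uncertainty strategy. MCRAL's point is to replace exactly this kind of fixed rule with an RL-trained policy, so the entropy scorer below is only a baseline stand-in and all names and numbers are hypothetical.

    ```python
    import numpy as np

    def entropy(probs):
        """Predictive entropy per instance; higher means more uncertain."""
        p = np.clip(probs, 1e-12, 1.0)
        return -(p * np.log(p)).sum(axis=1)

    def query_most_informative(probs, k=1):
        """Handcrafted query rule: pick the k unlabeled droplet images with
        the highest predictive entropy and send them for human annotation."""
        scores = entropy(probs)
        return np.argsort(-scores)[:k]

    # toy classifier outputs over 3 pinch-off classes for 3 unlabeled images
    probs = np.array([[0.90, 0.05, 0.05],
                      [0.34, 0.33, 0.33],
                      [0.60, 0.30, 0.10]])
    picked = query_most_informative(probs, k=1)   # the near-uniform row wins
    ```

    Because this rule is fixed, it cannot adapt as the informative instances shift over iterations, which is the gap the RL-trained query strategy addresses.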
  3. Abstract

    Inkjet printing (IJP) is an additive manufacturing process capable of producing intricate functional structures. The IJP process performance and the quality of the printed parts are considerably affected by the deposited droplets’ volume. Obtaining consistent droplet volume during the process is difficult because the droplets are prone to variations due to material properties, process parameters, and environmental conditions. Experimental (i.e., IJP setup observations) and computational (i.e., computational fluid dynamics (CFD)) analyses are used to study droplet variability; however, the former is expensive and the latter computationally inefficient. The objective of this paper is to propose a framework that performs fast and accurate droplet volume predictions for unseen IJP driving voltage regimes. A two-step approach is adopted: (1) an emulator is constructed from the physics-based droplet volume simulations to overcome the computational complexity, and (2) the emulator is calibrated by incorporating the experimental IJP observations. In particular, a scaled Gaussian stochastic process (s-GaSP) is deployed for the emulation and calibration. The resulting surrogate model rapidly and accurately predicts the IJP droplet volume. The proposed methodology is demonstrated by calibrating the emulator of simulated data (i.e., CFD droplet simulations) with experimental data from two distinct materials, namely glycerol and isopropyl alcohol.

     
    Free, publicly-accessible full text available June 12, 2024
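    The emulation step can be sketched with ordinary Gaussian process regression on simulator outputs; the paper's s-GaSP adds scaling and experimental calibration beyond this plain version, and the voltage/volume numbers below are made up for illustration.

    ```python
    import numpy as np

    def rbf(x1, x2, ls=1.0):
        """Squared-exponential kernel on 1-D inputs."""
        d = (x1[:, None] - x2[None, :]) ** 2
        return np.exp(-d / (2.0 * ls ** 2))

    def gp_predict(x_train, y_train, x_test, ls=1.0, noise=1e-8):
        """GP posterior mean: an emulator interpolating the CFD runs,
        so new driving voltages need no new simulation."""
        K = rbf(x_train, x_train, ls) + noise * np.eye(len(x_train))
        Ks = rbf(x_test, x_train, ls)
        alpha = np.linalg.solve(K, y_train)
        return Ks @ alpha

    # hypothetical: droplet volume (pL) simulated at a few driving voltages (V)
    volts = np.array([20.0, 25.0, 30.0, 35.0])
    vol = np.array([3.1, 4.0, 5.2, 6.1])
    pred = gp_predict(volts, vol, np.array([25.0]), ls=5.0)
    ```

    The emulator reproduces the simulator at its design points and interpolates between them; calibration would then shift this surface toward the experimental IJP observations.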
  5. Abstract

    Electrospinning is a promising process to fabricate functional parts from macrofibers and nanofibers of bio-compatible materials including collagen, polylactide (PLA), and polyacrylonitrile (PAN). However, the functionality of the produced parts relies heavily on the quality, repeatability, and uniformity of the electrospun fibers. Due to variations in material composition, process settings, and ambient conditions, the process suffers from large variations. In particular, the fiber formation in the stable regime (i.e., Taylor cone and jet) and its propagation to the substrate play the most significant role in process stability. This work aims to design a fast process monitoring tool for the dynamic electrospinning process based on Taylor cone and jet videos. Nevertheless, this is challenging since the videos are of high frequency and high dimension, and the monitoring statistics may not follow a parametric distribution. To achieve this goal, a framework integrating image analysis, sketch-based tensor decomposition, and non-parametric monitoring is proposed. In particular, we use Tucker tensor-sketch (Tucker-TS) based tensor decomposition to extract sparse structure representations of the videos. Additionally, since the extracted monitoring variables are non-normally distributed, a non-parametric bootstrap Hotelling T² control chart is deployed to handle this issue during the monitoring. The framework is demonstrated by electrospinning a PAN-based polymeric solution. Finally, it is shown that the proposed framework, which uses Tucker-TS, largely outperforms the alternating least squares (ALS) approach to Tucker tensor decomposition (Tucker-ALS) in computational speed across various anomaly detection tasks while keeping comparable anomaly detection accuracy.

     
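    The non-parametric monitoring step can be sketched as a bootstrap control limit for the Hotelling T² statistic on the extracted tensor features. The feature matrix, the bootstrap size, and the significance level below are illustrative assumptions, not the paper's settings.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)

    def t2_stats(X, mean, cov_inv):
        """Hotelling T² statistic for each row (monitoring sample) of X."""
        d = X - mean
        return np.einsum('ij,jk,ik->i', d, cov_inv, d)

    def bootstrap_ucl(X, alpha=0.05, n_boot=500):
        """Upper control limit from a non-parametric bootstrap of the
        in-control T² values, avoiding any normality assumption."""
        mean = X.mean(axis=0)
        cov_inv = np.linalg.inv(np.cov(X, rowvar=False))
        t2 = t2_stats(X, mean, cov_inv)
        boots = [np.quantile(rng.choice(t2, size=len(t2), replace=True), 1 - alpha)
                 for _ in range(n_boot)]
        return float(np.mean(boots))

    # toy stand-in for Tucker-TS monitoring features from in-control videos
    X = rng.normal(size=(200, 3))
    ucl = bootstrap_ucl(X)
    t2 = t2_stats(X, X.mean(axis=0), np.linalg.inv(np.cov(X, rowvar=False)))
    in_control = t2 <= ucl   # most in-control samples fall below the limit
    ```

    A new video whose feature vector produces T² above `ucl` would be flagged as a process anomaly.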
  8. Abstract

    Inkjet 3D printing has broad applications in areas such as health and energy due to its capability to precisely deposit micro-droplets of multi-functional materials. However, droplets in inkjet printing exhibit different jetting behaviors, including drop initiation, thinning, necking, pinching, and flying, and they are vulnerable to disturbances from vibration, material inhomogeneity, etc. Such issues make it challenging to yield a consistent printing process and a defect-free final product with desired properties. Therefore, timely recognition of the droplet behavior is critical for inkjet printing quality assessment. In-situ video monitoring of the printing process paves the way for such recognition. In this paper, a novel feature identification framework is presented to recognize the spatiotemporal features of in-situ monitoring videos for inkjet printing. Specifically, a spatiotemporal fusion network is used for droplet printing behavior classification. The categories are based on inkjet printability, which is related to both static features (ligament, satellite, and meniscus) and dynamic features (ligament thinning, droplet pinch-off, meniscus oscillation). For the recorded droplet jetting video data, two streams of networks, the frames sampled from the video in the spatial domain (associated with static features) and the optical flow in the temporal domain (associated with dynamic features), are fused in different ways to recognize the evolving droplet behavior. Experimental results show that the proposed fusion network can recognize the droplet jetting behavior in the complex printing process and identify its printability with learned knowledge, which can ultimately enable real-time inkjet printing quality control and provide guidance for designing optimal parameter settings for the inkjet printing process.

     
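    One simple way to fuse the two streams is score-level (late) fusion: average the class probabilities from the frame (spatial) and optical-flow (temporal) networks. The class names, logits, and equal weighting below are illustrative assumptions; the paper compares several fusion variants.

    ```python
    import numpy as np

    def softmax(z):
        """Numerically stable softmax over the last axis."""
        e = np.exp(z - z.max(axis=-1, keepdims=True))
        return e / e.sum(axis=-1, keepdims=True)

    def late_fuse(spatial_logits, temporal_logits, w=0.5):
        """Score-level fusion: weighted average of the per-class
        probabilities from the spatial and temporal streams."""
        return w * softmax(spatial_logits) + (1 - w) * softmax(temporal_logits)

    # hypothetical printability categories and per-stream logits for one clip
    classes = ["printable", "satellite", "no-pinch-off"]
    spatial = np.array([[2.0, 0.5, 0.1]])   # from sampled frames
    temporal = np.array([[1.5, 1.2, 0.2]])  # from optical flow
    fused = late_fuse(spatial, temporal)
    pred = classes[int(fused.argmax())]
    ```

    Both streams here lean toward the same class, so fusion agrees with them; when the streams disagree, the fused score arbitrates between static appearance and motion cues.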